As more and more artificial intelligence (AI) technologies move from the laboratory to real-world applications, the open-set and robustness challenges posed by real-world data have received increasing attention. Data augmentation is a widely used method for improving model performance, and recent works have also confirmed its positive effect on the robustness of AI models. However, most existing data augmentation methods are heuristic and lack exploration of their internal mechanisms. We apply explainable artificial intelligence (XAI) methods to explore the internal mechanisms of popular data augmentation methods, analyze the relationship between game interactions and several widely used robustness metrics, and propose a new proxy for model robustness in open-set environments. Based on this analysis of internal mechanisms, we develop a mask-based boosting method for data augmentation that comprehensively improves several robustness measures of AI models and outperforms state-of-the-art data augmentation approaches. Experiments show that our method can be applied to many popular data augmentation methods. Unlike adversarial training, our boosting method not only significantly improves the robustness of models, but also improves test-set accuracy. Our code is available at \url{https://github.com/Anonymous_for_submission}.
Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the speaker's emotion presented in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks as it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges the visual information to the corresponding speech prosody from three aspects: lip, face, and scene. Specifically, we align lip movement to the speech duration, and convey facial expression to speech energy and pitch via an attention mechanism based on valence and arousal representations inspired by recent findings in psychology. Moreover, we design an emotion booster to capture the atmosphere from global video scenes. All these embeddings are used together to generate a mel-spectrogram, which is then converted to a speech waveform via an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
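As a rough illustration of the face-to-prosody step described above, the sketch below shows a cross-attention module in which text-side queries attend over per-frame facial valence/arousal features to predict energy and pitch. The module name, dimensions, and heads are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (assumed form): phoneme-level queries attend over facial
# valence/arousal features to predict per-phoneme pitch and energy.
import torch
import torch.nn as nn

class FaceProsodyAttention(nn.Module):
    def __init__(self, d_model=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.pitch_head = nn.Linear(d_model, 1)
        self.energy_head = nn.Linear(d_model, 1)

    def forward(self, phoneme_embed, face_embed):
        # phoneme_embed: (B, T_text, d); face_embed: (B, T_video, d) valence/arousal features
        ctx, _ = self.attn(phoneme_embed, face_embed, face_embed)
        return self.pitch_head(ctx).squeeze(-1), self.energy_head(ctx).squeeze(-1)
```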
Motion prediction is closely tied to the perception of dynamic objects and static map elements in autonomous driving scenarios. In this work, we propose PIP, the first end-to-end Transformer-based framework that jointly and interactively performs online mapping, object detection, and motion prediction. PIP leverages map queries, agent queries, and mode queries to encode the instance-wise information of map elements, agents, and motion intentions, respectively. Based on this unified query representation, a differentiable multi-task interaction scheme is proposed to exploit the correlation between perception and prediction. Even without human-annotated HD maps or agents' historical trajectories as guidance, PIP realizes end-to-end multi-agent motion prediction and achieves better performance than tracking-based and HD-map-based methods. PIP provides comprehensive high-level information about the driving scene (a vectorized static map and dynamic objects with motion information), and contributes to downstream planning and control. Code and models will be released to facilitate further research.
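To make the query-interaction idea concrete, here is a minimal sketch of one plausible interaction step: agent queries cross-attend to map queries so that motion prediction can use online-mapping features. The class name, dimensions, and layer layout are assumptions for illustration and do not reproduce PIP's actual implementation.

```python
# Minimal sketch (assumed form): agent queries attend to map queries in a
# Transformer-style interaction layer.
import torch
import torch.nn as nn

class AgentMapInteraction(nn.Module):
    def __init__(self, d_model=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model * 4), nn.ReLU(),
                                 nn.Linear(d_model * 4, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, agent_queries, map_queries):
        # agent_queries: (B, N_agent, d); map_queries: (B, N_map, d)
        attended, _ = self.cross_attn(agent_queries, map_queries, map_queries)
        agent_queries = self.norm1(agent_queries + attended)
        return self.norm2(agent_queries + self.ffn(agent_queries))
```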
With the rapid development of explainable artificial intelligence (XAI), a series of prior works has raised concerns about the out-of-distribution (OOD) problem in perturbation-based post-hoc XAI models and has shown that their explanations are socially misaligned. We explore the limitations of post-hoc explanation methods that use approximators to mimic the behavior of black-box models. We then propose eXplanation-based Counterfactual Retraining (XCR), which extracts features rapidly. XCR applies the explanations generated by XAI models as counterfactual inputs to retrain the black-box model, addressing the OOD and social misalignment problems. Evaluation on popular image datasets shows that XCR can improve model performance while retaining only 12.5% of the most important features, without changing the black-box model structure. Moreover, evaluation on corruption benchmark datasets shows that XCR is very helpful for improving model robustness and positively affects calibration under OOD conditions. Even without calibration on a validation set, as some OOD calibration methods require, XCR outperforms existing methods on corrupted-data metrics. If calibration on the validation set is applied, our method also outperforms current OOD calibration methods on OOD calibration metrics.
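The following is a minimal sketch of the counterfactual-retraining idea under stated assumptions: a gradient-saliency explainer stands in for the XAI model, the top 12.5% most salient pixels are retained and the rest are masked, and the black-box model is retrained on these masked inputs. The helper names and the choice of explainer are illustrative, not XCR's actual implementation.

```python
# Minimal sketch of explanation-based counterfactual retraining, assuming a
# gradient-saliency explainer and a 12.5% feature-retention ratio.
import torch
import torch.nn.functional as F

def counterfactual_inputs(model, x, y, keep_ratio=0.125):
    """Keep only the most salient pixels according to input gradients; zero the rest."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    saliency = grad.abs().sum(dim=1, keepdim=True)       # (B, 1, H, W)
    flat = saliency.flatten(1)
    k = int(keep_ratio * flat.shape[1])
    thresh = flat.topk(k, dim=1).values[:, -1:]           # per-sample k-th largest value
    mask = (flat >= thresh).float().view_as(saliency)
    return x.detach() * mask                              # retain ~12.5% of pixels

def xcr_step(model, optimizer, x, y):
    """One retraining step on counterfactual (masked) inputs."""
    x_cf = counterfactual_inputs(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_cf), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```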
Detecting out-of-distribution (OOD) samples is crucial for the safe deployment of classifiers in the real world. However, deep neural networks are known to be overconfident on anomalous data. Existing works directly design scoring functions by mining the inconsistency between the classifier's behavior on in-distribution (ID) and OOD data. In this paper, we further complement this inconsistency with the hypothesis that an autoencoder trained on ID data cannot reconstruct OOD samples as well as ID samples. We propose a novel method, READ (Reconstruction Error Aggregated Detector), to unify the inconsistencies of the classifier and the autoencoder. Specifically, the reconstruction error of raw pixels is transformed into the latent space of the classifier. We show that the transformed reconstruction error bridges the semantic gap and inherits the detection performance of the original. In addition, we propose an adjustment strategy to alleviate the overconfidence problem of the autoencoder based on a fine-grained characterization of OOD data. For two scenarios, we propose two variants of our method: READ-MD (Mahalanobis distance), which is based only on the pre-trained classifier, and READ-ED (Euclidean distance), which retrains the classifier. Our method does not require access to test-time data for tuning hyperparameters. Finally, we demonstrate the effectiveness of the proposed method through extensive comparisons with state-of-the-art OOD detection algorithms. On a WideResNet pre-trained on CIFAR-10, our method reduces the average FPR@95TPR by 9.8% compared with the previous state-of-the-art.
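Below is a minimal sketch of one plausible reading of the READ-ED scoring idea: the pixel-space reconstruction error is "transformed" by comparing the classifier's features of the input and of its autoencoder reconstruction, and this is combined with a simple maximum-softmax-probability term as the classifier-side inconsistency. The aggregation, function names, and score form are assumptions; the paper's exact scoring and adjustment strategy may differ.

```python
# Minimal sketch (assumed form) of a READ-style OOD score, Euclidean-distance variant.
import torch
import torch.nn.functional as F

@torch.no_grad()
def read_ed_score(classifier_features, classifier_head, autoencoder, x):
    """Higher score = more likely out-of-distribution."""
    x_hat = autoencoder(x)                      # reconstruction from an ID-trained autoencoder
    f_x = classifier_features(x)                # classifier features of the input
    f_hat = classifier_features(x_hat)          # classifier features of the reconstruction
    recon_score = (f_x - f_hat).pow(2).sum(dim=1).sqrt()   # reconstruction error in latent space
    probs = F.softmax(classifier_head(f_x), dim=1)
    msp_score = 1.0 - probs.max(dim=1).values   # classifier inconsistency proxy
    return recon_score + msp_score              # simple aggregation (assumption)
```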
Noisy labels degrade the performance of deep networks. For robust learning, a prominent two-stage pipeline alternates between eliminating possibly incorrect labels and semi-supervised training. However, discarding part of the observed labels may cause a loss of information, especially when the corruption is not completely random, e.g., class-dependent or instance-dependent. Moreover, from the training dynamics of the representative two-stage method DivideMix, we identify the dominance of confirmation bias: pseudo-labels fail to correct a considerable number of noisy labels, and errors consequently accumulate. To fully exploit the observed labels and mitigate wrong corrections, we propose Robust Label Refurbishment (Robust LR), a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels. We show that our method successfully alleviates the damage of both label noise and confirmation bias. As a result, it achieves state-of-the-art results across datasets and noise types. For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
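The sketch below shows the generic form that label refurbishment typically takes: each observed label is kept in proportion to an estimated confidence that it is clean, and otherwise replaced by the model's pseudo-label. The confidence estimator used here (the model's own probability for the observed class) is a simplifying assumption; Robust LR's actual confidence estimation and schedule are not reproduced.

```python
# Minimal sketch (assumed form) of confidence-weighted label refurbishment.
import torch
import torch.nn.functional as F

@torch.no_grad()
def refurbish_labels(logits, observed_onehot):
    """Blend observed labels with pseudo-labels using a per-sample confidence weight."""
    pseudo = F.softmax(logits, dim=1)
    # Confidence that the observed label is correct: probability the model assigns to it.
    w = (pseudo * observed_onehot).sum(dim=1, keepdim=True)
    # Convex combination; rows already sum to one by construction.
    return w * observed_onehot + (1.0 - w) * pseudo
```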
This paper focuses on designing efficient models with few parameters and FLOPs for dense prediction. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when the same framework is shared. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like four-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
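For intuition, here is a minimal PyTorch sketch of an inverted-residual block whose expanded stage mixes a depthwise convolution (short-distance modeling) with self-attention (long-distance modeling), in the spirit of the iRMB described above. The official EMO block differs in details such as windowed attention, normalization, and expansion ratios; this is an illustrative assumption, not the released implementation.

```python
# Minimal sketch (assumed form) of an inverted-residual block mixing a depthwise
# convolution with self-attention. Note: dim * expand must be divisible by heads.
import torch
import torch.nn as nn

class IRMBSketch(nn.Module):
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.BatchNorm2d(dim)
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.expand(self.norm(x))
        local = self.dwconv(y)                          # local modeling: depthwise conv
        tokens = y.flatten(2).transpose(1, 2)           # (B, H*W, hidden)
        glob, _ = self.attn(tokens, tokens, tokens)     # global modeling: self-attention
        glob = glob.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(local + glob)           # inverted residual connection
```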
We aim to bridge the gap between common-sense, few-sample human learning and large-data machine learning. We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle. Modelling human learning is difficult, as how people learn varies from one person to another. Under commonly accepted definitions, we prove that all human or animal few-shot learning, and major models of such learning including the Free Energy Principle and Bayesian Program Learning, approximate our theory under the Church-Turing thesis. We find that deep generative models such as the variational autoencoder (VAE) can be used to approximate our theory and perform significantly better than baseline models, including deep neural networks, for image recognition, low-resource language processing, and character recognition.
Despite significant progress in object categorization in recent years, a number of important challenges remain, mainly the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and to address supervised, zero-shot, generalized zero-shot, and open-set recognition within a unified framework. Specifically, we propose a weighted maximum-margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open-set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
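The sketch below illustrates the kind of distance constraint described above in its simplest hinge-loss form: each labeled sample should be closer to its correct vocabulary prototype than to any other prototype by a margin. The function name, margin value, and distance metric are illustrative assumptions; the paper's full weighted maximum-margin objective is not reproduced.

```python
# Minimal sketch (assumed form) of a margin-based distance constraint over prototypes.
import torch

def vocabulary_margin_loss(embeddings, prototypes, labels, margin=1.0):
    """Hinge loss pushing each sample closer to its own prototype than to any other."""
    dists = torch.cdist(embeddings, prototypes)          # (B, num_prototypes)
    pos = dists.gather(1, labels.unsqueeze(1))           # distance to the correct prototype
    # Exclude the correct prototype when searching for the closest wrong one.
    masked = dists.scatter(1, labels.unsqueeze(1), float("inf"))
    neg = masked.min(dim=1, keepdim=True).values
    return torch.clamp(pos - neg + margin, min=0.0).mean()
```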
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
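To give a concrete feel for the frozen-state idea, here is a small tabular sketch under strong simplifying assumptions: finite fast and slow state spaces, known transition matrices, a slow transition kernel that is action-independent, and a crude averaging of the lower-level values. Array names and shapes are illustrative, not the paper's notation or algorithm.

```python
# Minimal numeric sketch (assumed form) of frozen-state approximate dynamic programming:
# solve a finite-horizon fast MDP per frozen slow state, then run value iteration on the
# slow timescale using those values.
import numpy as np

def lower_level_value(P_fast, R, s, horizon, gamma):
    """Backward induction over fast states with the slow state s frozen.

    P_fast: (A, F, F) fast-state transitions; R: (S, F, A) rewards.
    Returns the (F,)-vector of finite-horizon values at frozen slow state s.
    """
    V = np.zeros(P_fast.shape[1])
    for _ in range(horizon):
        Q = R[s].T + gamma * P_fast @ V      # (A, F) action-values
        V = Q.max(axis=0)                    # greedy backup over actions
    return V

def upper_level_vi(P_slow, P_fast, R, horizon, gamma, iters=100):
    """Value iteration over slow states, using frozen-state lower-level values as rewards.

    P_slow: (S, S) slow-state transitions (assumed action-independent for simplicity).
    """
    S = P_slow.shape[0]
    # Frozen-state value per slow state, averaged over fast states to keep the sketch small.
    frozen = np.array([lower_level_value(P_fast, R, s, horizon, gamma).mean()
                       for s in range(S)])
    U = np.zeros(S)
    for _ in range(iters):
        U = frozen + (gamma ** horizon) * P_slow @ U
    return U
```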